1""" 

2cpprb: Fast Flexible Replay Buffer Library 

3 

4cpprb provides replay buffer classes for reinforcement learning. 

5Details are described at `Home Page <https://ymd_h.gitlab.io/cpprb/>`_. 

6 

7Examples 

8-------- 

9Replay Buffer classes can be imported from ``cpprb`` package. 

10 

11>>> from cpprb import ReplayBuffer 

12 

These buffer classes are created by specifying ``env_dict``, a mapping from
each stored value's name to its configuration.

>>> buffer_size = 1e+6
>>> env_dict = {"obs": {}, "act": {}, "rew": {}, "next_obs": {}, "done": {}}
>>> rb = ReplayBuffer(buffer_size, env_dict)
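
Each ``env_dict`` entry can also describe the stored array. A minimal sketch
(the ``shape`` and ``dtype`` keys follow the cpprb documentation; the sizes
here are illustrative assumptions):

>>> import numpy as np
>>> typed_env_dict = {"obs": {"shape": (4,), "dtype": np.float32},
...                   "act": {},
...                   "rew": {},
...                   "next_obs": {"shape": (4,), "dtype": np.float32},
...                   "done": {}}
>>> typed_rb = ReplayBuffer(buffer_size, typed_env_dict)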

When adding transitions, all values must be passed as keyword arguments.

>>> rb.add(obs=1, act=1, rew=0.5, next_obs=2, done=0)

You can also add multiple transitions at once.

>>> rb.add(obs=[1,2], act=[1,2], rew=[0.5,0.3], next_obs=[2,3], done=[0,1])

At the end of an episode, users must call the ``on_episode_end()`` method.

>>> rb.on_episode_end()

Transitions can be sampled according to each buffer's algorithm (e.g.
uniform random sampling).

>>> sample = rb.sample(32)
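
The returned ``sample`` is a ``dict`` mapping each ``env_dict`` key to a
batched NumPy array, so individual values are accessed by key:

>>> batch_obs = sample["obs"]
>>> batch_done = sample["done"]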

34""" 

35 

from .PyReplayBuffer import (ReplayBuffer, PrioritizedReplayBuffer,
                             MPReplayBuffer, MPPrioritizedReplayBuffer,
                             SelectiveReplayBuffer, ReverseReplayBuffer)
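
# ``PrioritizedReplayBuffer`` additionally returns importance-sampling
# information. A minimal sketch (``alpha``/``beta`` values are illustrative,
# and ``new_priorities`` is a hypothetical array of updated priorities):
#
#     per = PrioritizedReplayBuffer(buffer_size, env_dict, alpha=0.5)
#     per.add(obs=1, act=1, rew=0.5, next_obs=2, done=0)
#     s = per.sample(32, beta=0.4)  # also contains "weights" and "indexes"
#     per.update_priorities(s["indexes"], new_priorities)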

from .LaBER import LaBERmean, LaBERlazy, LaBERmax
from .HER import HindsightReplayBuffer

from .PyReplayBuffer import create_buffer, train
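
# ``create_buffer`` constructs a buffer class chosen by keyword flags. A
# short sketch (the ``prioritized`` flag is taken from the cpprb
# documentation; ``buffer_size``/``env_dict`` are as in the docstring above):
#
#     per = create_buffer(buffer_size, env_dict, prioritized=True)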

try:
    from .util import create_env_dict, create_before_add_func
except ImportError:
    # If gym is not installed, these util functions are not defined.
    pass
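
# ``create_env_dict`` and ``create_before_add_func`` derive ``env_dict`` and
# an add-time converter from a gym environment. A minimal sketch (assumes
# gym is installed; the environment id is illustrative):
#
#     import gym
#     env = gym.make("CartPole-v1")
#     env_dict = create_env_dict(env)
#     before_add = create_before_add_func(env)
#     rb = ReplayBuffer(int(1e+4), env_dict)
#     obs = env.reset()
#     act = env.action_space.sample()
#     next_obs, rew, done, _ = env.step(act)
#     rb.add(**before_add(obs=obs, act=act, next_obs=next_obs, rew=rew, done=done))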